The search functionality is under construction.

Keyword Search Result

[Keyword] neural net(879hit)

801-820hit(879hit)

  • Application of an Improved Genetic Algorithm to the Learning of Neural Networks

    Yasumasa IKUNO  Hiroaki HAWABATA  Yoshiaki SHIRAO  Masaya HIRATA  Toshikuni NAGAHARA  Yashio INAGAKI  

     
    LETTER-Neural Networks

      Vol:
    E77-A No:4
      Page(s):
    731-735

Recently, the back propagation method, one of the algorithms for training neural networks, has been widely applied in various fields because of its excellent characteristics. However, it has drawbacks: slow learning, the possibility of falling into a local minimum, and the need to adjust a learning constant for every application. In this article we propose an algorithm which overcomes some of the drawbacks of back propagation by using an improved genetic algorithm.
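The abstract above does not give the authors' actual algorithm, but the general idea of evolving network weights with a genetic algorithm instead of gradient descent can be sketched as follows. This is a minimal illustrative sketch: the single linear neuron, truncation selection, averaging crossover, and all constants are assumptions, not from the paper.

```python
import random

def mse(params, data):
    # mean squared error of a single linear neuron y = w*x + b
    w, b = params
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def ga_train(data, pop_size=30, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: mse(p, data))    # rank by fitness (lower is better)
        parents = pop[: pop_size // 2]          # elitist truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [(a[k] + b[k]) / 2 for k in range(2)]   # averaging crossover
            if rng.random() < 0.3:                          # Gaussian mutation
                k = rng.randrange(2)
                child[k] += rng.gauss(0, 0.1)
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda p: mse(p, data))
```

Because only the scalar fitness is used, no gradient needs to exist, which is why such schemes avoid the learning-constant tuning mentioned in the abstract.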

  • A Method to Reduce Redundant Hidden Nodes

    Iwao SEKITA  Takio KURITA  David K. Y. CHIU  Hideki ASOH  

     
    PAPER-Network Synthesis

      Vol:
    E77-D No:4
      Page(s):
    443-449

The number of nodes in the hidden layer of a feed-forward layered network reflects an optimality condition of the network in coding a function. It also affects the computation time and the ability of the network to generalize. When an arbitrary number of hidden nodes is used in designing the network, redundancy among the hidden nodes can often be seen. In this paper, a method of reducing hidden nodes is proposed on the condition that the reduced network maintains the performance of the original network within an acceptable level of tolerance. The method can be applied to estimate the performance of a network with fewer hidden nodes; the estimated performance indicates a lower bound on the actual performance of the network. Experiments were performed on classification using Fisher's IRIS data, a set of SONAR data, and the XOR data. The results suggest that a sufficient number of hidden nodes, fewer than the original number, can be estimated by the proposed method.
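The paper's reduction method (maintaining performance within a tolerance) is not reproduced in the abstract, but the kind of redundancy it targets can be illustrated with a simple check: a hidden node whose activations track another node's almost exactly over the whole data set contributes nothing new. The correlation threshold and the pairwise-scan strategy below are illustrative assumptions.

```python
def correlation(xs, ys):
    # Pearson correlation of two equal-length activation sequences
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def redundant_hidden_nodes(activations, threshold=0.99):
    # activations[j] holds hidden node j's outputs over the data set
    redundant = set()
    for j in range(len(activations)):
        if j in redundant:
            continue
        for k in range(j + 1, len(activations)):
            if k in redundant:
                continue
            if abs(correlation(activations[j], activations[k])) > threshold:
                redundant.add(k)    # keep node j, drop its near-duplicate k
    return redundant
```

A pruned network would then drop the flagged nodes and fold their outgoing weights into the nodes they duplicate; that compensation step is omitted here.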

  • AVHRR Image Segmentation Using Modified Backpropagation Algorithm

    Tao CHEN  Mikio TAKAGI  

     
    PAPER-Image Processing

      Vol:
    E77-D No:4
      Page(s):
    490-497

Analysis of satellite images requires classification of image objects. Since different categories may have almost the same brightness or features in high-dimensional remote sensing data, many object categories overlap with each other. How to segment the object categories accurately is still an open question. It is widely recognized that the assumptions required by many classification methods (maximum likelihood estimation, etc.) are suspect for textural features based on image pixel brightness. We propose an image-feature-based neural network approach for the segmentation of AVHRR images. The learning algorithm is a modified backpropagation with gain and weight decay, since feedforward networks using the backpropagation algorithm have been generally successful and enjoy wide popularity. Destructive algorithms that adapt the neural architecture during training have also been developed. A classification accuracy of 100% is reached for a validation data set. The classification result is compared with those of Kohonen's LVQ and a pixel-by-pixel method based on the basic backpropagation algorithm. Visual inspection of the result images shows that our method not only distinguishes categories with similar signatures very well, but is also robust to noise.
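The "weight decay" term mentioned above is a standard regularizer: each update shrinks the weights slightly toward zero in addition to following the gradient. A minimal sketch of that term in isolation (the paper's full modified backpropagation, including the gain term, is not given in the abstract, and the learning rate and decay constant below are illustrative):

```python
def gradient_descent_weight_decay(grad, w, lr=0.1, decay=0.01, steps=100):
    # plain gradient descent with an added weight-decay (L2 shrinkage) term:
    #   w <- w - lr * (grad(w) + decay * w)
    for _ in range(steps):
        g = grad(w)
        w = [wi - lr * (gi + decay * wi) for wi, gi in zip(w, g)]
    return w
```

On the toy objective E(w) = Σ(wᵢ − 1)², decay shifts the fixed point from 1 to 2/2.01 ≈ 0.995, showing how the term biases solutions toward small weights.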

  • Photometric Stereo for Specular Surface Shape Based on Neural Network

    Yuji IWAHORI  Hidekazu TANAKA  Robert J. WOODHAM  Naohiro ISHII  

     
    PAPER-Image Processing

      Vol:
    E77-D No:4
      Page(s):
    498-506

This paper proposes a new method to determine the shape of a surface by learning the mapping between three image irradiances observed under illumination from three lighting directions and the corresponding surface gradient. The method uses the Phong reflectance function to describe specular reflectance; Lambertian reflectance is included as a special case. A neural network is constructed to estimate the values of the reflectance parameters and the object surface gradient distribution under the assumption that the reflectance parameters are not known in advance. The method reconstructs the surface gradient distribution after determining the reflectance parameters of a test object, using a two-step neural network which consists of a network that extracts the two gradient parameters from the three image irradiances and its inverse. The effectiveness of the proposed neural network is confirmed by computer simulations and by an experiment with a real object.

  • A Stochastic Parallel Algorithm for Supervised Learning in Neural Networks

    Abhijit S. PANDYA  Kutalapatata P. VENUGOPAL  

     
    PAPER-Learning

      Vol:
    E77-D No:4
      Page(s):
    376-384

The Alopex algorithm is presented as a universal learning algorithm for neural networks. Alopex is a stochastic parallel process which has previously been applied in the theory of perception. It has also been applied to several nonlinear optimization problems, such as the Travelling Salesman Problem. It estimates the weight changes by using only a scalar cost function which is a measure of global performance. In this paper we describe the use of the Alopex algorithm for solving nonlinear learning tasks with multilayer feed-forward networks. Alopex has several advantages, such as the ability to escape from local minima, rapid algorithmic computation based on a scalar cost function, and synchronous updating of weights. We present the results of computer simulations for several tasks, such as learning of parity, encoder problems and the MONK's problems. The learning performance as well as the generalization capacity of the Alopex algorithm are compared with those of the backpropagation procedure, and it is shown that Alopex has specific advantages over backpropagation. An important advantage of the Alopex algorithm is its ability to extract information from noisy data. We investigate the efficacy of the algorithm for faster convergence by considering different error functions, and show that an information-theoretic error measure exhibits better convergence characteristics. The algorithm has also been applied to more complex practical problems, such as undersea target recognition from sonar returns and adaptive control of dynamical systems, and the results are discussed.
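The core Alopex idea described above — updating every weight from only the scalar cost, by correlating each weight's last move with the last change in cost — can be sketched as follows. This is a minimal sketch, not the paper's implementation: the fixed temperature, step size, and the quadratic test cost are illustrative assumptions (published Alopex variants typically anneal the temperature).

```python
import math
import random

def alopex(cost, w, steps=4000, delta=0.02, temp=0.005, seed=1):
    # Each weight moves by +/-delta every step. The probability of reversing
    # direction grows when the previous move correlates with a cost increase.
    # Only the scalar cost is used -- no gradients.
    rng = random.Random(seed)
    move = [rng.choice([-delta, delta]) for _ in w]
    e_prev = cost(w)
    w = [wi + mi for wi, mi in zip(w, move)]
    best_w, best_e = list(w), cost(w)
    for _ in range(steps):
        e = cost(w)
        if e < best_e:
            best_e, best_w = e, list(w)
        de = e - e_prev
        e_prev = e
        for i in range(len(w)):
            c = move[i] * de                       # move/cost-change correlation
            p = 1.0 / (1.0 + math.exp(-c / temp))  # prob. of stepping -delta
            move[i] = -delta if rng.random() < p else delta
            w[i] += move[i]
    return best_w
```

Because every weight is updated synchronously from the same scalar signal, the process parallelizes trivially — the property the abstract emphasizes.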

  • Extraction of Feature Attentive Regions in a Learnt Neural Network

    Hideki SANO  Atsuhiro NADA  Yuji IWAHORI  Naohiro ISHII  

     
    PAPER-Image Processing

      Vol:
    E77-D No:4
      Page(s):
    482-489

This paper proposes a new method of extracting feature attentive regions in a learnt multi-layer neural network. We define a function which calculates the degree of dependence of an output unit on an input unit. The value of this function can be used to investigate whether a learnt network detects the feature regions in the training patterns. Three computer simulations are presented: (1) investigation of the basic characteristics of this function; (2) application of our method to a simple pattern classification task; (3) application of our method to a large-scale pattern classification task.

  • Estimation of Arm Posture in 3D-Space from Surface EMG Signals Using a Neural Network Model

    Yasuharu KOIKE  Mitsuo KAWATO  

     
    INVITED PAPER

      Vol:
    E77-D No:4
      Page(s):
    368-375

We have aimed at constructing a forward dynamics model (FDM) of the human arm in the form of an artificial neural network from recordings of EMG signals and movement trajectories. We succeeded in: (1) estimating the joint torques under isometric conditions and (2) estimating trajectories from surface EMG signals in the horizontal plane. The human arm has seven degrees of freedom: the shoulder has three, the elbow has one and the wrist has three. Only two degrees of freedom were considered in the previous work, and the arm was supported horizontally, so free movement in 3D space still had to be addressed; for 3D movements or posture control, compensation for gravity must also be considered. In this paper, four joint angles, one at the elbow and three at the shoulder, were estimated from the surface EMG signals of 12 flexor and extensor muscles during posture control in 3D space.

  • Iterative Middle Mapping Learning Algorithm for Cellular Neural Networks

    Chen HE  Akio USHIDA  

     
    PAPER-Neural Networks

      Vol:
    E77-A No:4
      Page(s):
    706-715

In this paper, a middle-mapping learning algorithm for cellular associative memories is presented. This algorithm makes full use of the properties of the cellular neural network, so that the associative memory has some advantages over a memory designed by the outer product method. It guarantees that each prototype is stored at an equilibrium point. In a practical implementation, the circuit is easy to build because the weight matrix representing the connections between cells is not symmetric. The synchronous updating rule makes its associative speed very fast compared to the Hopfield associative memory.

  • Range Image Segmentation Using Multiple Markov Random Fields

    In Gook CHUN  Kyu Ho PARK  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E77-D No:3
      Page(s):
    306-316

A method of range image segmentation using four Markov random fields (MRFs) is described in this paper. The MRFs are used in the depth smoothing, gradient smoothing, edge detection and surface type labeling stages. First, the range image and its gradient image are smoothed one after another, preserving jump and roof edges respectively, using the line process concept. Then jump and roof edges are extracted, combined, and refined by penalizing undesirable edge patterns. Finally, curvatures are computed and the surface types are labeled according to the signs of the principal curvatures; the surface type labels are refined using winner-takes-all layers in this stage. The final output is a set of regions, each with its exact surface type. An energy function is used to represent the constraints of each stage, and the minimum energy state is found using an iterative method. Several experimental results show the generality of our approach, and the execution speed of the proposed method is faster than that of a typical region merging method. This promises practical applications of our method.

  • Automatic Color Segmentation Method Using a Neural Network Model for Stained Images

    Hironori OKII  Noriaki KANEKI  Hiroshi HARA  Koichi ONO  

     
    PAPER-Bio-Cybernetics

      Vol:
    E77-D No:3
      Page(s):
    343-350

This paper describes a color segmentation method which is essential for automatic diagnosis of stained images. The method is robust to variation in the input images, using a three-layered neural network model. In this network, a back-propagation algorithm was used for learning, and the training data sets of RGB values were selected from between the dark and bright images of normal mammary glands. Features of both normal mammary glands and breast cancer tissues stained with hematoxylin-eosin (HE) were segmented into three colors. The segmentation results indicate that this network model can successfully extract features at various brightness levels and magnifications as long as HE staining is used. Thus, this color segmentation method can accommodate changes in the brightness levels as well as the hue values of input images. Moreover, the method is robust to variation in the scaling and rotation of the extracted targets.

  • Identification of Chaotic Dynamical Systems with Back-Propagation Neural Networks

    Masaharu ADACHI  Makoto KOTANI  

     
    PAPER-Nonlinear Phenomena and Analysis

      Vol:
    E77-A No:1
      Page(s):
    324-334

In this paper, we clarify fundamental properties of conventional back-propagation neural networks in learning chaotic dynamical systems through numerical experiments. We train three-layer networks using the back-propagation algorithm with data from two examples of two-dimensional discrete dynamical systems. We qualitatively evaluate the trained networks with two methods: analysing the geometrical mapping structure, and reconstructing an attractor by recurrent feedback of the networks. We also quantitatively evaluate the trained networks by calculating the Lyapunov exponents, which indicate whether the dynamics of the recurrent networks are chaotic or periodic. In many cases, the trained networks show a high ability to extract the mapping structures of the original two-dimensional dynamical systems. We confirm that the Lyapunov exponents of the trained networks correspond to whether the attractors reconstructed by the recurrent networks are chaotic or periodic.
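The quantitative check used above — a positive largest Lyapunov exponent indicates chaos, a negative one periodicity — can be sketched for a one-dimensional map by averaging the log-derivative along an orbit. The logistic map at r = 4 is used here purely as a test case (its exact exponent is ln 2); the paper's systems are two-dimensional and would need the Jacobian-based generalization.

```python
import math

def lyapunov_exponent(f, df, x0, n=200000, burn=1000):
    # largest Lyapunov exponent of a 1-D map:
    # average of log|f'(x)| along a long orbit, after a burn-in
    x = x0
    for _ in range(burn):
        x = f(x)
    total = 0.0
    for _ in range(n):
        x = f(x)
        d = abs(df(x))
        total += math.log(d if d > 1e-300 else 1e-300)  # guard a zero derivative
    return total / n

# fully chaotic logistic map, exact exponent ln 2
logistic = lambda x: 4.0 * x * (1.0 - x)
d_logistic = lambda x: 4.0 - 8.0 * x
```

A trained recurrent network would be evaluated the same way, with `f` replaced by the network's iteration and `df` by its derivative.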

  • A Current-Mode Implementation of a Chaotic Neuron Model Using a SI Integrator

    Nobuo KANOU  Yoshihiko HORIO  Kazuyuki AIHARA  Shogo NAKAMURA  

     
    LETTER-Nonlinear Circuits and Systems

      Vol:
    E77-A No:1
      Page(s):
    335-338

This paper presents an improved current-mode circuit for the implementation of a chaotic neuron model. The proposed circuit uses a switched-current integrator and a nonlinear output function circuit, based on an operational transconductance amplifier, as building blocks. It is shown by SPICE simulations and experiments using discrete elements that the proposed circuit closely replicates the behavior of the chaotic neuron model.

  • Optical Associative Memory Using Optoelectronic Neurochips for Image Processing

    Masaya OITA  Yoshikazu NITTA  Shuichi TAI  Kazuo KYUMA  

     
    PAPER

      Vol:
    E77-C No:1
      Page(s):
    56-62

This paper presents a novel model of optical associative memory using optoelectronic neurochips, which detect and process a two-dimensional input image at the same time. The original point of this model is that the optoelectronic neurochips allow direct image processing in terms of a parallel input/output interface and parallel neural processing. The operation principle is based on the nonlinear transformation of the input image to the corresponding point attractor of a fully connected neural network. The learning algorithm is simulated annealing, with the energy of the network state used as its cost function. Computer simulations show the model's usefulness, and that the maximum number of stored images is 150 in a network with 64 neurons. Moreover, we experimentally demonstrate an optical implementation of the model using the optoelectronic neurochip. The chip consists of a two-dimensional array of variable sensitivity photodetectors with 8×16 elements. The experimental results show that 3 images of size 8×8 were successfully stored in the system. In the case of an input image of size 64×64, the estimated processing speed is 100 times higher than that of conventional optoelectronic neurochips.

  • An Autocorrelation Associative Neural Network with Self-Feedbacks

    Hiroshi UEDA  Masaya OHTA  Akio OGIHARA  Kunio FUKUNAGA  

     
    LETTER

      Vol:
    E76-A No:12
      Page(s):
    2072-2075

In this article, the autocorrelation associative neural network, one of the well-known applications of neural networks, is improved to extend its capacity and error-correcting ability. Our improvement is based on the observation that negative self-feedbacks remove spurious states. We therefore propose a method that determines self-feedbacks as small as possible within the range in which all stored patterns remain stable. A state transition rule that enables escape from oscillation is also presented, because the method has a possibility of falling into oscillation. The efficiency of the method is confirmed by means of computer simulations.
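For reference, the baseline the article improves on — the conventional autocorrelation (outer-product) associative memory with the self-feedback terms simply zeroed — can be sketched as follows. The article's contribution, tuning small negative self-feedbacks instead, is not reproduced here; the orthogonal test patterns are illustrative.

```python
def store(patterns):
    # conventional autocorrelation (outer-product) weights, zero diagonal;
    # patterns are lists of +1/-1
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, x, steps=5):
    # synchronous sign updates until (hopefully) a fixed point
    n = len(x)
    for _ in range(steps):
        x = [1 if sum(w[i][j] * x[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return x
```

With few, nearly orthogonal patterns each prototype is a stable state and single-bit errors are corrected; spurious states appear as more patterns are packed in, which is the failure mode the self-feedback tuning addresses.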

  • A Neural Network with a Function of Inhibiting Subtours on TSP

    Akira YAMAMOTO  Masaya OHTA  Hiroshi UEDA  Akio OGIHARA  Kunio FUKUNAGA  

     
    LETTER

      Vol:
    E76-A No:12
      Page(s):
    2068-2071

The Traveling Salesman Problem (TSP) can be solved by a neural network using a coding scheme based on the adjacency of cities in the tour. Using this coding scheme, the neural network generates a better solution than those using other coding schemes. However, we often obtain an invalid solution consisting of several subtours. In this article, we propose a method of eliminating subtours using additional neurons. Computer simulations show that we obtain the optimum solution using only O(n²) additional neurons and trials.

  • Data Compression of Long Time ECG Recording Using BP and PCA Neural Networks

    Yasunori NAGASAKA  Akira IWATA  

     
    PAPER

      Vol:
    E76-D No:12
      Page(s):
    1434-1442

The performances of BPNN (a neural network trained by back propagation) and PCANN (a neural network which computes principal component analysis) for ECG data compression have been investigated from several points of view, and compared with an existing data compression method, TOMEK. We used the MIT/BIH arrhythmia database as ECG data. Both BPNN and PCANN showed better results than TOMEK, achieving 1.1 to 1.4 times higher compression for the same accuracy of reproduction (a PRD of 13.0% and a CC of 99.0%). While PCANN showed better learning ability than BPNN in a simple learning task, BPNN was slightly better than PCANN regarding compression rates. Observing the reproduced waveforms, BPNN and PCANN had almost the same performance, and both were superior to TOMEK. The following characteristics were obtained from the experiments. Since PCANN is sensitive to the learning rate, we had to control the learning rate precisely while learning was in progress. We also found that PCANN tends to need a larger number of iterations in learning than BPNN to reach the same performance; although PCANN showed better learning ability, the total learning costs were therefore almost the same. We analyzed the connection weight patterns. Since PCANN has a clear mathematical background, its behavior can be explained theoretically. BPNN sometimes generated connection weights similar to the principal components; we suppose that BPNN may occasionally generate such patterns, and performs well when doing so. Finally, we conclude as follows. Although the difference in performance is small, it was always observed, and PCANN never exceeded BPNN. When ease of analysis or the relation to mathematics is important, PCANN is suitable; it will be useful for the study of recorded data such as statistics.
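The PCA side of the comparison is easy to sketch: project each signal window onto the leading principal component(s), keep only the projection scores as the compressed representation, and reconstruct by re-expanding. The sketch below keeps a single component found by power iteration and reports distortion with the PRD measure mentioned above; the toy two-sample data and all dimensions are illustrative, not the paper's ECG setup.

```python
def first_principal_component(data, iters=200):
    # power iteration for the leading eigenvector of the covariance matrix,
    # applied without forming the matrix explicitly
    n, d = len(data), len(data[0])
    means = [sum(row[k] for row in data) / n for k in range(d)]
    centered = [[row[k] - means[k] for k in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        scores = [sum(r[k] * v[k] for k in range(d)) for r in centered]
        v = [sum(scores[i] * centered[i][k] for i in range(n)) for k in range(d)]
        norm = sum(c * c for c in v) ** 0.5
        v = [c / norm for c in v]
    return means, v

def compress(data, means, v):
    # one score per sample: projection onto the principal direction
    return [sum((row[k] - means[k]) * v[k] for k in range(len(v))) for row in data]

def reconstruct(scores, means, v):
    return [[means[k] + s * v[k] for k in range(len(v))] for s in scores]

def prd(orig, rec):
    # percent root-mean-square difference, the distortion measure above
    num = sum((o - r) ** 2 for ro, rr in zip(orig, rec) for o, r in zip(ro, rr))
    den = sum(o ** 2 for ro in orig for o in ro)
    return 100.0 * (num / den) ** 0.5
```

Storing k scores per d-sample window gives a d/k compression ratio; the PCANN in the paper learns such components adaptively rather than computing them in closed form.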

  • Physiologically-Based Speech Synthesis Using Neural Networks

    Makoto HIRAYAMA  Eric Vatikiotis-BATESON  Mitsuo KAWATO  

     
    PAPER

      Vol:
    E76-A No:11
      Page(s):
    1898-1910

This paper focuses on two areas in our effort to synthesize speech from neuromotor input using neural network models that effect transforms between cognitive intentions to speak, their physiological effects on vocal tract structures, and subsequent realization as acoustic signals. The first area concerns the biomechanical transform between motor commands to muscles and the ensuing articulator behavior. Using physiological data of muscle EMG (electromyography) and articulator movements during natural English speech utterances, three articulator-specific neural networks learn the forward dynamics that relate motor commands to the muscles and motion of the tongue, jaw, and lips. Compared to a fully-connected network that maps muscle EMG and motion for all three sets of articulators at once, this modular approach improves performance by reducing network complexity and eliminates some of the confounding influence of functional coupling among articulators. Network independence has also allowed us to identify and assess the effects of technical and empirical limitations on an articulator-by-articulator basis. This is particularly important for modeling the tongue, whose complex structure is very difficult to examine empirically. The second area of progress concerns the transform between articulator motion and the speech acoustics. From the articulatory movement trajectories, a second neural network generates PARCOR (partial correlation) coefficients which are then used to synthesize the speech acoustics. In the current implementation, articulator velocities have been added as inputs to the network. As a result, the model now follows the fast changes of the coefficients for consonants generated by relatively slow articulatory movements during natural English utterances. Although much work still needs to be done, progress in these areas brings us closer to our goal of emulating speech production processes computationally.

  • Generalization Ability of Extended Cascaded Artificial Neural Network Architecture

    Joarder KAMRUZZAMAN  Yukio KUMAGAI  Hiromitsu HIKITA  

     
    LETTER-Neural Networks

      Vol:
    E76-A No:10
      Page(s):
    1877-1883

We present an extension of the previously proposed 3-layer feedforward network called a cascaded network. Cascaded networks are trained to perform category classification employing binary input vectors and locally represented binary target output vectors. To realize a nonlinearly separable task, the extended cascaded network presented here is constructed by introducing high-order cross-product inputs at the input layer. In the construction of the cascaded network, two 2-layer networks are first trained independently by the delta rule and then cascaded. After cascading, the intermediate layer can be understood as a hidden layer trained to attain preassigned saturated outputs in response to the training set. In a cascaded network trained to categorize binary image patterns, the saturation of hidden outputs reduces the effect of corrupted disturbances present in the input. We demonstrate that the extended cascaded network is able to realize a nonlinearly separable task and yields better generalization ability than the backpropagation network.
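The key trick above — adding high-order cross-product inputs so that a nonlinearly separable task becomes linearly separable — can be shown on XOR: with the extra feature x₁·x₂, a single threshold unit trained by a simple perceptron-style delta rule solves it. This is an illustrative sketch of the input augmentation only, not the authors' two-stage cascaded construction; the learning rate and epoch count are assumptions.

```python
def augment(x1, x2):
    # bias, the two raw inputs, and the high-order cross-product term
    return [1.0, x1, x2, x1 * x2]

def train_perceptron(samples, epochs=100, lr=0.5):
    # samples: (feature vector, target) pairs with target in {0, 1}
    w = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        for feats, target in samples:
            y = 1 if sum(wi * f for wi, f in zip(w, feats)) > 0 else 0
            if y != target:                 # delta-rule style correction
                w = [wi + lr * (target - y) * f for wi, f in zip(w, feats)]
    return w

def predict(w, x1, x2):
    return 1 if sum(wi * f for wi, f in zip(w, augment(x1, x2))) > 0 else 0
```

Without the x₁·x₂ feature the same unit provably cannot represent XOR; with it, the perceptron convergence theorem guarantees the training loop above terminates with all four cases correct.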

  • Exploiting Parallelism in Neural Networks on a Dynamic Data-Driven System

    Ali M. ALHAJ  Hiroaki TERADA  

     
    PAPER-Neural Networks

      Vol:
    E76-A No:10
      Page(s):
    1804-1811

    High speed simulation of neural networks can be achieved through parallel implementations capable of exploiting their massive inherent parallelism. In this paper, we show how this inherent parallelism can be effectively exploited on parallel data-driven systems. By using these systems, the asynchronous parallelism of neural networks can be naturally specified by the functional data-driven programs, and maximally exploited by pipelined and scalable data-driven processors. We shall demonstrate the suitability of data-driven systems for the parallel simulation of neural networks through a parallel implementation of the widely used back propagation networks. The implementation is based on the exploitation of the network and training set parallelisms inherent in these networks, and is evaluated using an image data compression network.

  • A New Neural Network Algorithm with the Orthogonal Optimized Parameters to Solve the Optimal Problems

    Dao Heng YU  Jiyou JIA  Shinsaku MORI  

     
    PAPER-Neural Networks

      Vol:
    E76-A No:9
      Page(s):
    1520-1526

In this paper, a definite relation between the TSP's optimal solution and the attracting regions in the parameter space of the TSP's energy function is discovered, and an attracting region corresponding to the global optimal solution of the TSP is found. A neural network algorithm with parameters optimized by the orthogonal array table method is then proposed and used to solve the Travelling Salesman Problem (TSP) for 30, 31 and 300 cities and the map-coloring problem (MCP). The results are very satisfactory.
